Assessing Large Language Models for Online Extremism Research: Identification, Explanation, and New Knowledge
Beidi Dong, Jin R. Lee, Ziwei Zhu, Balassubramanian Srinivasan
The United States has experienced a significant increase in violent extremism, prompting the need for automated tools to detect and limit the spread of extremist ideology online. This study evaluates the performance of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-Trained Transformers (GPT) in detecting and classifying online domestic extremist posts. We collected social media posts containing "far-right" and "far-left" ideological keywords and manually labeled them as extremist or non-extremist. Extremist posts were further classified into one or more of five contributing elements of extremism based on a working definitional framework. The BERT model's performance was evaluated based on training data size and knowledge transfer between categories. We also compared the performance of GPT 3.5 and GPT 4 models using different prompts: naïve, layperson-definition, role-playing, and professional-definition. Results showed that the best-performing GPT models outperformed the best-performing BERT models, with more detailed prompts generally yielding better results; however, overly complex prompts may impair performance. Different versions of GPT have unique sensitivities to what they consider extremist: GPT 3.5 performed better at classifying far-left extremist posts, while GPT 4 performed better at classifying far-right extremist posts. Large language models, represented by GPT models, hold significant potential for online extremism classification tasks, surpassing traditional BERT models in a zero-shot setting. Future research should explore human-computer interactions in optimizing GPT models for extremist detection and classification tasks to develop more efficient (e.g., quicker, less effort) and effective (e.g., fewer errors) methods for identifying extremist content.
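The four zero-shot prompting strategies the abstract names can be sketched as prompt templates. This is a minimal illustration only: the template wording and the `build_prompt` helper below are hypothetical, not the prompts used in the study.

```python
# Hypothetical sketches of the four zero-shot prompt styles compared in the
# study: naive, layperson-definition, role-playing, professional-definition.
# The exact wording of the study's prompts is not reproduced here.

PROMPTS = {
    "naive": (
        "Is the following social media post extremist? "
        "Answer yes or no.\n\n{post}"
    ),
    "layperson": (
        "Extremism means holding views far outside the mainstream that may "
        "endorse violence. Given that, is the following post extremist? "
        "Answer yes or no.\n\n{post}"
    ),
    "role_playing": (
        "You are a content moderator trained to identify extremist material. "
        "Is the following post extremist? Answer yes or no.\n\n{post}"
    ),
    "professional": (
        "Using a professional definition of extremism (pursuit of ideological "
        "goals through hostility or violence toward an out-group), is the "
        "following post extremist? Answer yes or no.\n\n{post}"
    ),
}


def build_prompt(style: str, post: str) -> str:
    """Fill the chosen prompt template with the post text to classify."""
    return PROMPTS[style].format(post=post)
```

Each filled prompt would then be sent to the model and the yes/no answer parsed into an extremist/non-extremist label; the abstract's finding that detail helps up to a point suggests comparing such variants empirically rather than assuming the most elaborate one wins.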
Facebook Will Use Artificial Intelligence to Find Extremist Posts
Responding to complaints that not enough is being done to keep extremist content off social media platforms, Facebook said Thursday that it would begin using artificial intelligence to help remove inappropriate content. Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook. One of the first applications for the technology is identifying content that clearly violates Facebook's terms of use, such as photos and videos of beheadings or other gruesome images, and stopping users from uploading them to the site. "Tragically, we have seen more terror attacks recently," Ms. Bickert said.